
    Acting rehearsal in collaborative multimodal mixed reality environments

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.

    Device effect on panoramic video+context tasks

    Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance has not yet been tested for such imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even though participants felt less capable, whereas tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and we state the design implications of our results for panoramic imagery systems.

    In-air gestures around unmodified mobile devices

    (Commission Notice on agreements of minor importance which do not appreciably restrict competition within the meaning of Article 81(1) of the Treaty establishing the European Community (de minimis), JOCE C 368, 22 Dec. 2001; Decision of 25 July 2001, Deutsche Post - interception of cross-border mail, JOCE L 331, 15 Dec. 2001)

    Bitmap Movement Detection: HDR for Dynamic Scenes

    Exposure Fusion and other HDR techniques generate well-exposed images from a bracketed image sequence while reproducing a large dynamic range that far exceeds the dynamic range of a single exposure. Common to all these techniques is the problem that even the smallest movements in the captured images generate artefacts (ghosting) that dramatically affect the quality of the final images. This limits the use of HDR and Exposure Fusion techniques, because common scenes of interest are usually dynamic. We present a method that adapts Exposure Fusion, as well as standard HDR techniques, to allow for dynamic scenes without introducing artefacts. Our method detects clusters of moving pixels within a bracketed exposure sequence using simple binary operations. We show that the proposed technique is able to deal with a large amount of movement in the scene and with different movement configurations. The result is a ghost-free and highly detailed exposure-fused image at a low computational cost.
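
    The binary movement detection lends itself to a compact implementation. Below is a minimal sketch of the core idea, assuming median-threshold bitmaps as the exposure-invariant binarisation and OpenCV connected components for clustering; the function names, kernel size, and area threshold are illustrative assumptions, not the authors' code.

```python
import numpy as np
import cv2

def median_threshold_bitmap(img_gray):
    """Binarise an exposure at its median intensity. The resulting bitmap
    is largely invariant to exposure time, so bitmaps from different
    brackets can be compared directly with binary operations."""
    return (img_gray > np.median(img_gray)).astype(np.uint8)

def movement_clusters(exposures, min_cluster_px=50):
    """Return a binary mask of pixel clusters that move between exposures.

    exposures: list of same-sized uint8 grayscale images from a bracket.
    """
    ref = median_threshold_bitmap(exposures[len(exposures) // 2])
    moving = np.zeros_like(ref)
    for img in exposures:
        # XOR flags pixels whose binarised value differs from the reference
        moving |= median_threshold_bitmap(img) ^ ref
    # Morphological closing merges nearby flagged pixels into clusters
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    moving = cv2.morphologyEx(moving, cv2.MORPH_CLOSE, kernel)
    # Keep only connected components large enough to be real motion
    n, labels, stats, _ = cv2.connectedComponentsWithStats(moving)
    mask = np.zeros_like(moving)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_cluster_px:
            mask[labels == i] = 1
    return mask
```

    Pixels inside the returned mask would then be excluded from (or filled from a single exposure in) the fusion step, which is what keeps the result ghost-free.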

    MagTics: Flexible and thin form factor magnetic actuators for dynamic and wearable haptic feedback

    We present MagTics, a novel flexible and wearable haptic interface based on magnetically actuated bidirectional tactile pixels (taxels). MagTics' thin form factor and flexibility allow for rich haptic feedback in mobile settings. We propose a novel actuation mechanism based on bistable electromagnetic latching that combines a high frame rate and holding force with low energy consumption and a soft, flexible form factor. We overcome limitations of traditional soft actuators by placing several hard actuation cells, driven by flexible printed electronics, in a soft 3D-printed case. Novel EM shielding prevents magnet-magnet interactions and allows for high actuator densities. A prototypical implementation comprising 4 actuated pins on a 1.7 cm pitch, with 2 mm travel, and generating 160 mN to 200 mN of latching force is used to implement a number of compelling application scenarios, including adding haptic and tactile display capabilities to wearable devices and existing input devices, and providing localized haptic feedback in virtual reality. Finally, we report the results of a psychophysical study conducted to inform future developments and to identify possible application domains.
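
    The bistable latching principle is what decouples holding force from power draw: a brief current pulse flips a pin between its two latched states, and no holding current is needed afterwards. The hypothetical driver sketch below illustrates that control pattern; the Coil abstraction, pin layout, and pulse width are assumptions for illustration, not MagTics' actual firmware.

```python
import time

PULSE_S = 0.005  # assumed pulse width; real hardware would tune this

class Coil:
    """Stand-in for one channel of an H-bridge driving a taxel coil."""
    def __init__(self, name):
        self.name = name
    def on(self):
        print(f"{self.name}: energised")
    def off(self):
        print(f"{self.name}: de-energised")

class BistableTaxel:
    """One tactile pixel with two magnetically latched positions."""
    def __init__(self, idx):
        self.up = Coil(f"taxel{idx}-up")
        self.down = Coil(f"taxel{idx}-down")
        self.extended = False

    def set(self, extended: bool):
        if extended == self.extended:
            return                  # already latched; bistability holds it
        coil = self.up if extended else self.down
        coil.on()
        time.sleep(PULSE_S)         # short pulse flips the magnetic latch
        coil.off()                  # no holding current after latching
        self.extended = extended

# Render a simple 2x2 tactile pattern; energy is spent only on state changes.
taxels = [BistableTaxel(i) for i in range(4)]
for taxel, state in zip(taxels, [True, False, False, True]):
    taxel.set(state)
```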

    Convolutional Autoencoders for Human Motion Infilling

    In this paper we propose a convolutional autoencoder to address the problem of motion infilling for 3D human motion data. Given a start and end sequence, motion infilling aims to complete the missing gap in between, such that the filled-in poses plausibly continue the start sequence and naturally transition into the end sequence. To this end, we propose a single, end-to-end trainable convolutional autoencoder. We show that a single model can be used to create natural transitions between different types of activities. Furthermore, our method is not only able to fill in entire missing frames, but can also complete gaps where partial poses are available (e.g. from end effectors) and clean up other forms of noise (e.g. Gaussian noise). The model can also fill in an arbitrary number of gaps that potentially vary in length, and no further post-processing of the model's outputs, such as smoothing or closing discontinuities at the end of the gap, is necessary. At the heart of our approach lies the idea of casting motion infilling as an inpainting problem and training a convolutional de-noising autoencoder on image-like representations of motion sequences. At training time, blocks of columns are removed from such images and we ask the model to fill in the gaps. We demonstrate the versatility of the approach on a number of complex motion sequences and report on thorough evaluations performed to better understand the capabilities and limitations of the proposed approach.
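
    The training scheme, masking blocks of columns (frames) in an image-like motion representation and asking the model to reconstruct them, can be sketched compactly. Below is a minimal, illustrative PyTorch version assuming 1D convolutions over time with pose dimensions as channels; the layer sizes, masking parameters, and pose layout are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

POSE_DIM = 66  # e.g. 22 joints x 3 values per frame (assumed layout)

class InfillAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutions run over the time axis; pose dims are the channels
        self.encoder = nn.Sequential(
            nn.Conv1d(POSE_DIM, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, POSE_DIM, kernel_size=5, padding=2),
        )

    def forward(self, x):  # x: (batch, POSE_DIM, frames)
        return self.decoder(self.encoder(x))

def mask_gap(batch, max_gap=30):
    """Zero out a random block of frames, mimicking a missing gap."""
    masked = batch.clone()
    frames = batch.shape[-1]
    gap = torch.randint(5, max_gap, (1,)).item()
    start = torch.randint(0, frames - gap, (1,)).item()
    masked[:, :, start:start + gap] = 0.0
    return masked

# One denoising-style training step: reconstruct the clip from its masked copy.
model = InfillAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
clip = torch.randn(8, POSE_DIM, 120)  # stand-in for real motion data
opt.zero_grad()
loss = nn.functional.mse_loss(model(mask_gap(clip)), clip)
loss.backward()
opt.step()
```

    Because the loss is computed over the whole clip rather than only the gap, the model learns to both preserve visible frames and produce smooth transitions into the reconstructed region.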

    PanoInserts: Mobile Spatial Teleconferencing

    We present PanoInserts: a novel teleconferencing system that uses smartphone cameras to create a surround representation of meeting places. We take a static panoramic image of a location into which we insert live videos from smartphones. We use a combination of marker- and image-based tracking to position the video inserts within the panorama, and transmit this representation to a remote viewer. We conduct a user study comparing our system with fully-panoramic video and conventional webcam video conferencing for two spatial reasoning tasks. Results indicate that our system performs comparably with fully-panoramic video, and better than webcam video conferencing, in tasks that require an accurate surrounding representation of the remote space. We discuss the representational properties and usability of the various video presentations, exploring how they are perceived and how they influence users performing spatial reasoning tasks.
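
    The image-based half of the tracking can be illustrated with a short OpenCV sketch: match features between a smartphone frame and the panorama, estimate a homography, and composite the warped frame into place. This is an assumed, simplified variant for illustration; the actual system combines it with marker-based tracking and handles live video streams.

```python
import cv2
import numpy as np

def insert_frame(panorama, frame):
    """Composite a single video frame into a static panorama by matching
    image features and warping the frame with the estimated homography."""
    orb = cv2.ORB_create(1000)
    gray_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_p = cv2.cvtColor(panorama, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = orb.detectAndCompute(gray_f, None)
    kp_p, des_p = orb.detectAndCompute(gray_p, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_p), key=lambda m: m.distance)
    matches = matches[:100]  # keep the strongest correspondences
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_p[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC needs at least 4 good correspondences to fit the homography
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    size = (panorama.shape[1], panorama.shape[0])
    warped = cv2.warpPerspective(frame, H, size)
    mask = cv2.warpPerspective(np.ones(frame.shape[:2], np.uint8), H, size)
    out = panorama.copy()
    out[mask.astype(bool)] = warped[mask.astype(bool)]  # paste the insert
    return out
```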